Temporal difference learning
Temporal difference (TD) learning is a prediction-based machine learning method. It has primarily been used for the reinforcement learning problem, and is said to be "a combination of Monte Carlo ideas and dynamic programming (DP) ideas." TD resembles a Monte Carlo method because it learns by sampling the environment according to some ''policy'', and is related to dynamic programming techniques as it approximates its current estimate based on previously learned estimates (a process known as bootstrapping). The TD learning algorithm is related to the temporal difference model of animal learning.
As a prediction method, TD learning takes into account the fact that subsequent predictions are often correlated in some sense. In standard supervised predictive learning, one learns only from actually observed values: a prediction is made, and when the observation becomes available, the prediction is adjusted to better match the observation. As elucidated by Richard Sutton, the core idea of TD learning is that one adjusts predictions to match other, more accurate, predictions about the future (a revised version is available on Richard Sutton's publication page). This procedure is a form of bootstrapping, as illustrated with the following example:
: "Suppose you wishes to predict the weather for Saturday, and you have some model that predicts Saturday's weather, given the weather of each day in the week. In the standard case, you would wait until Saturday and then adjust all your models. However, when it is, for example, Friday, you should have a pretty good idea of what the weather would be on Saturday - and thus be able to change, say, Monday's model before Saturday arrives."〔
Mathematically speaking, both in the standard and the TD approach one tries to optimize a cost function related to the error in our predictions of the expectation of some random variable ''z'', E[z]. However, while the standard approach in some sense assumes E[z] = z (the actual observed value), the TD approach uses a model. For the particular case of reinforcement learning, which is the major application of TD methods, z is the total return and E[z] is given by the Bellman equation of the return.
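In that reinforcement learning setting, with predictions indexed by state rather than by raw time step (the states s and s_{t+1} and the policy \pi below are notation added here for illustration, not taken from the text above), the Bellman equation for the return takes the standard form
: V^{\pi}(s) = E_{\pi}[\, r_t + \gamma V^{\pi}(s_{t+1}) \mid s_t = s \,]
so the bootstrapped quantity r_t + \gamma V^{\pi}(s_{t+1}) plays the role that the observed value z plays in supervised learning.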
== Mathematical formulation ==
Let r_t be the reinforcement on time step ''t''. Let \bar V_t be the correct prediction, equal to the discounted sum of all future reinforcement. The discounting is done by powers of a factor \gamma, so that reinforcement at more distant time steps is less important.
: \bar V_t = \sum_{i=0}^{\infty} \gamma^i r_{t+i}
where 0 \le \gamma < 1. This formula can be expanded by separating out the first term of the sum:
: \bar V_t = r_t + \sum_{i=1}^{\infty} \gamma^i r_{t+i}
Changing the index ''i'' to start from 0 and then factoring \gamma out of the sum gives
: \bar V_t = r_t + \sum_{i=0}^{\infty} \gamma^{i+1} r_{t+i+1}
: \bar V_t = r_t + \gamma \sum_{i=0}^{\infty} \gamma^{i} r_{t+i+1}
: \bar V_t = r_t + \gamma \bar V_{t+1}
Thus, the reinforcement r_t is the difference between the correct prediction at time ''t'' and the discounted correct prediction at the next time step:
: r_t = \bar V_t - \gamma \bar V_{t+1}
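As a concrete illustration of how this recursion is used for learning, the following is a minimal sketch of tabular TD(0) prediction in Python. The five-state random-walk environment, the step function, and the learning-rate and discount values are assumptions made for this example; only the update rule itself, which moves each estimate toward the bootstrapped target r_t + \gamma \bar V_{t+1}, follows from the formulation above.
 import random
 
 NUM_STATES = 5   # assumed: a small 5-state random-walk chain
 ALPHA = 0.1      # assumed learning rate
 GAMMA = 0.9      # discount factor, 0 <= gamma < 1
 
 def step(state):
     # Assumed toy environment: move left or right at random; reaching the
     # right end yields reinforcement 1 and ends the episode.
     next_state = min(max(state + random.choice((-1, 1)), 0), NUM_STATES - 1)
     reward = 1.0 if next_state == NUM_STATES - 1 else 0.0
     return next_state, reward, next_state == NUM_STATES - 1
 
 V = [0.0] * NUM_STATES   # current estimates of the discounted return per state
 
 for _ in range(5000):
     state, done = 0, False
     while not done:
         next_state, reward, done = step(state)
         # Bootstrapped target r_t + gamma * V(next state); terminal states contribute 0.
         target = reward + (0.0 if done else GAMMA * V[next_state])
         # TD(0) update: move the estimate toward the target by a fraction of the TD error.
         V[state] += ALPHA * (target - V[state])
         state = next_state
 
 print([round(v, 2) for v in V])
Each update nudges V[state] by the TD error (target - V[state]); after many episodes the estimates settle near the expected discounted reinforcement obtainable from each state under this random policy.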

Excerpt source: Wikipedia, the free encyclopedia (read the full "temporal difference learning" article on Wikipedia).